Communication/Computation Tradeoffs in Consensus-Based Distributed Optimization
Abstract
We study the scalability of consensus-based distributed optimization algorithms by considering two questions: How many processors should we use for a given problem, and how often should they communicate when communication is not free? Central to our analysis is a problem-specific value r, which quantifies the communication/computation tradeoff. We show that organizing the communication among nodes as a k-regular expander graph [1] yields speedups, whereas when all pairs of nodes communicate (a complete graph), there is an optimal number of processors that depends on r. Surprisingly, a speedup can be obtained, in terms of the time to reach a fixed level of accuracy, by communicating less and less frequently as the computation progresses. Experiments on a real cluster solving metric learning and non-smooth convex minimization tasks demonstrate strong agreement between theory and practice.
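To make the tradeoff concrete, the sketch below simulates consensus-based distributed gradient descent on a synthetic least-squares problem: each node takes local gradient steps on its private data and performs a consensus-averaging round only at geometrically spaced iterations, so communication grows sparser as the run progresses. This is a minimal illustration under my own assumptions (ring topology, uniform mixing weights, step size, schedule), not the algorithm or code used in the paper.

```python
import numpy as np

# Minimal sketch of consensus-based distributed gradient descent (NOT the
# paper's algorithm or code). Each node i holds a private least-squares
# objective f_i(x) = 0.5 * ||A_i x - b_i||^2; the global objective is the
# sum over nodes. Nodes alternate local gradient steps with occasional
# consensus-averaging rounds; the rounds occur at iterations 1, 3, 7, 15,
# ... so that communication becomes rarer as the computation progresses.

rng = np.random.default_rng(0)
n_nodes, dim, m = 8, 5, 20

# Synthetic local data: node i sees only (A_i, b_i).
A = [rng.standard_normal((m, dim)) for _ in range(n_nodes)]
x_true = rng.standard_normal(dim)
b = [Ai @ x_true + 0.1 * rng.standard_normal(m) for Ai in A]

# Doubly stochastic mixing matrix for a ring: each node averages equally
# with itself and its two neighbors (an illustrative 2-regular topology).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1 / 3

x = np.zeros((n_nodes, dim))            # one local iterate per node
step = 1e-3                             # illustrative step size
next_comm, gap = 1, 1                   # geometric communication schedule

for t in range(1, 201):
    # Local computation: one gradient step per node on its own data.
    grads = np.stack([Ai.T @ (Ai @ xi - bi)
                      for Ai, bi, xi in zip(A, b, x)])
    x -= step * grads
    # Communication: a consensus round, but only at scheduled iterations.
    if t == next_comm:
        x = W @ x
        gap *= 2                        # double the wait before next round
        next_comm += gap

# Compare against the centralized least-squares solution.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("max deviation of any node from x*:", np.abs(x - x_star).max())
```

Intuitively, sparser communication late in the run can cost little because the nodes' iterates change less and less as they approach the optimum, so there is less fresh information to exchange.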
Similar Resources
Mathematical Programs for Belief Propagation and Consensus
This paper develops methods for distributed Bayesian hypothesis testing for fault detection and diagnosis, based on belief propagation and optimization in graphical models. The main challenges in developing distributed statistical estimation algorithms are (i) difficulties in ensuring convergence and consensus for solutions of distributed inference problems, and (ii) increasing computational cos...
A Solution to Fastest Distributed Consensus Problem for Generic Star & K-cored Star Networks
Distributed average consensus is the main mechanism in algorithms for decentralized computation. In a distributed average consensus algorithm, each node has an initial state, and the goal is to compute the average of these initial states at every node. To accomplish this task, each node updates its state with a weighted average of its own and its neighbors' states, using local communication between ...
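The update rule described above is the linear iteration x(t+1) = W x(t) with a doubly stochastic weight matrix W. A small self-contained illustration, assuming a fixed path graph with Metropolis weights (my choice of topology and weights, not taken from the paper):

```python
import numpy as np

# Distributed average consensus: every node repeatedly replaces its value
# with a weighted average of its own and its neighbors' values. With a
# symmetric, doubly stochastic W, the sum (hence the average) is preserved
# at every step and all values converge to the initial average.

n = 6
rng = np.random.default_rng(1)
x = rng.standard_normal(n)              # initial state at each node
target = x.mean()                       # the value all nodes should reach

# Metropolis weights on the path graph 0-1-2-3-4-5: each edge gets weight
# 1/(1 + max(deg_i, deg_j)) = 1/3, and self-weights make rows sum to 1.
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1 / 3
for i in range(n):
    W[i, i] = 1.0 - W[i].sum()

for _ in range(200):                    # iterate x(t+1) = W x(t)
    x = W @ x

print("node values after consensus:", np.round(x, 6))
print("true average:               ", round(target, 6))
```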
Generalized Distributed Learning Under Uncertainty for Camera Networks
Consensus-based distributed learning is the problem of bringing local learning models into a general consensus that achieves a global objective. It is a problem of increasing interest due to its applications in sensor networks. Distributed learning has many benefits over traditional centralized learning, such as faster computation and reduced communic...
Randomized Constraints Consensus for Distributed Robust Linear Programming
In this paper we consider a network of processors aiming to cooperatively solve linear programming problems subject to uncertainty. Each node knows only a common cost function and its own local uncertain constraint set. We propose a randomized, distributed algorithm that works under a time-varying, asynchronous, and directed communication topology. The algorithm is based on a local computation and comm...
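As a rough illustration of the constraints-consensus idea (a plain deterministic variant on a fixed directed ring, under my own assumptions; the paper's algorithm is randomized and handles uncertain constraints), each node can repeatedly solve an LP over its own constraints plus a neighbor's active ones, and forward the constraints that are tight at its optimum:

```python
import numpy as np
from scipy.optimize import linprog

# Heavily simplified constraints-consensus sketch: every node privately
# knows halfspace constraints a^T x >= b_k (encoded as -a^T x <= -b_k for
# linprog), all nodes share the cost c, and in each round a node solves an
# LP over its current "basis", its neighbor's basis, and its local
# constraints, then keeps only the constraints active at its optimum.

rng = np.random.default_rng(2)
n_nodes, dim, n_loc = 4, 2, 3
c = np.ones(dim)                         # common cost, known to all nodes

local_A = [-rng.uniform(0.5, 2.0, (n_loc, dim)) for _ in range(n_nodes)]
local_b = [-rng.uniform(1.0, 3.0, n_loc) for _ in range(n_nodes)]

basis = [(local_A[i].copy(), local_b[i].copy()) for i in range(n_nodes)]

for _ in range(6):                       # a few communication rounds
    new_basis = []
    for i in range(n_nodes):
        j = (i - 1) % n_nodes            # receive basis from ring predecessor
        A_ub = np.vstack([basis[i][0], basis[j][0], local_A[i]])
        b_ub = np.concatenate([basis[i][1], basis[j][1], local_b[i]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10)] * dim)
        tight = np.isclose(A_ub @ res.x, b_ub, atol=1e-7)
        if not tight.any():              # numerical safeguard: keep all
            tight[:] = True
        new_basis.append((A_ub[tight], b_ub[tight]))
    basis = new_basis

# After enough rounds, every node's basis should yield the same optimum.
for i in range(n_nodes):
    res = linprog(c, A_ub=np.vstack([basis[i][0], local_A[i]]),
                  b_ub=np.concatenate([basis[i][1], local_b[i]]),
                  bounds=[(0, 10)] * dim)
    print(f"node {i} optimum:", np.round(res.x, 4))
```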
Generalized Distributed Learning and Applications
Consensus-based distributed learning is the problem of bringing local learning models into a general consensus that achieves a global objective. It is a problem of increasing interest due to its applications in sensor networks. Distributed learning has many benefits over traditional centralized learning, such as lower computation and communication costs. I...
Publication date: 2012